Builds and puppets
Many non-trivial PHP projects require a specific build and deployment process. Composer dependencies need to be installed, JavaScript and CSS files need to be compiled, and since we are proud to provide our customers an uninterrupted service, just uploading the application via FTP is out of the question. Besides, many applications don’t consist of just Apache / Nginx, PHP and MySQL anymore. You’ll want some helper services here, some Elasticsearch there, a Redis cache and an HAProxy. All of these services need to be orchestrated, and you don’t want to set them all up manually and document the steps in a text file. Or on a post-it.
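A build step like that can be sketched as a short shell script. This is only an illustration: the flags are standard Composer and npm options, but the asset pipeline, output paths and artifact name are assumptions that will differ per project.

```shell
#!/usr/bin/env bash
# Sketch of a CI build step for a PHP project with frontend assets.
# Assumes Composer and npm are installed; paths and names are hypothetical.
set -euo pipefail

# Install PHP dependencies reproducibly from the lock file, without dev packages
composer install --no-dev --prefer-dist --optimize-autoloader

# Compile JavaScript and CSS assets from the lock file
npm ci
npm run build

# Package everything into an artifact the CI server can archive and deploy
tar -czf app-build.tar.gz --exclude='.git' .
```

The point is that the whole sequence is captured in one place, so a CI server can run it identically on every commit.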
We addressed the build phase by introducing continuous integration servers, like our good old friend Jenkins. Previously, packaging an application for shipping was a delicate operation, requiring the kind of tranquility you would bring to disarming a rusty bomb. Now it’s a matter of pushing a button (or sending a command in a group chat), which produces reliable results. And that also means: it reliably produces errors – which is a good thing, because they are easier to spot than random faults you cannot reproduce.
On the server side, things became more predictable as well: with tools like Puppet, Chef and Ansible we started to automate our infrastructure. Instead of logging into a shell and apt-getting packages manually, copying configuration snippets from existing servers and spending an hour finding a friendly name for your new server (because you’ll be spending some part of your lifetime together), you could now just start up a virtual server, run a script and enjoy a good coffee while watching your little minions create the environment.
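To make the difference concrete, here is what such a provisioning script might look like as a minimal Ansible playbook. The host group, package list and template path are made up for illustration; the module names (`apt`, `template`, `service`) are standard Ansible.

```yaml
# provision-web.yml – hypothetical playbook for a PHP web server
- hosts: webservers
  become: true
  tasks:
    - name: Install Nginx and PHP-FPM
      apt:
        name:
          - nginx
          - php-fpm
        state: present
        update_cache: true

    - name: Deploy the Nginx vhost configuration
      template:
        src: templates/vhost.conf.j2
        dest: /etc/nginx/sites-enabled/app.conf
      notify: Reload nginx

  handlers:
    - name: Reload nginx
      service:
        name: nginx
        state: reloaded
```

Running the playbook twice is safe: each task describes a desired state, not a sequence of one-off commands.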
Infrastructure as Code
But things only started becoming really interesting when “Infrastructure as Code” (IaC) became a thing. Instead of just writing scripts which set up a server, we started to treat the whole infrastructure – including servers, load balancers, storage and firewalls – as software. And since software is usually kept in version control systems like Git, you can version, review and effectively reproduce all of the infrastructure your application needs to run.
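In a declarative tool such as Terraform, that idea looks roughly like the following sketch. The provider, region, AMI ID and instance size are placeholder values, not a recommendation:

```hcl
# main.tf – hypothetical sketch of a web server plus firewall rule as code
provider "aws" {
  region = "eu-central-1"
}

resource "aws_security_group" "web" {
  name = "web-http"
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_instance" "app" {
  ami                    = "ami-12345678" # placeholder image ID
  instance_type          = "t3.small"
  vpc_security_group_ids = [aws_security_group.web.id]
}
```

Because the server and its firewall rule live in the same versioned file, a code review of the infrastructure becomes as normal as a code review of the application.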
Okay, nice. But what’s the deal with containers?
Even though we can wrap the build process into scripts and describe the infrastructure with tools like Terraform, there’s still a whole lot of non-standardized work to be done in order to get someone’s application up and running. When the cargo industry started to put goods into boxes after World War II, they became much easier to transport, which greatly reduced expenses and increased speed. But only the standardization of container dimensions in the late 1960s brought the breakthrough in international trade, and the resulting standard, ISO 668, was a major factor in the globalization of trade. It’s not about the box, it’s about the standard.
But how does that apply to me when I ship applications and not coffee?
Containerization in software can have effects similar to those it had on trade. It simplifies the process of shipping an application to its users, it makes testing and production more predictable, and it streamlines the conversation about the infrastructural needs of your software. Containerized applications require new tools, new processes and probably a new developer mindset.
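For a PHP application, the “standardized box” could be described in a Dockerfile roughly like this; the base image tag and the assumption that dependencies and assets were already built are placeholders for illustration:

```dockerfile
# Hypothetical sketch: package a PHP application as a container image
FROM php:8.2-apache

# Copy the application, including its pre-built vendor/ and asset directories
COPY . /var/www/html/

# Document the port the container listens on
EXPOSE 80
```

Anyone with a container runtime can build and run this image the same way, regardless of what happens to be installed on the host – that is the standardized dimension of the box.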
Even though we already run containers in production, we are not quite there yet when it comes to picking a random application and deploying it to an arbitrary infrastructure. At the very least, these are exciting times for a developer to live in. And the container standard has yet to be agreed upon – I don’t think it will be today’s Docker.